Most existing deep neural networks are static, meaning they can only perform inference at a fixed complexity. But resource budgets can vary substantially across devices. Even on a single device, the affordable budget can change with different scenarios, and repeatedly training networks for every required budget is prohibitively expensive. Therefore, in this work we propose a general method called MutualNet to train a single network that can run under a variety of resource constraints. Our method trains a cohort of model configurations with different network widths and input resolutions. This mutual learning scheme not only allows the model to run at different width-resolution configurations, but also transfers the unique knowledge among these configurations, helping the model learn stronger representations. MutualNet is a general training methodology that can be applied to various network structures (e.g., 2D networks: MobileNets, ResNet; 3D networks: SlowFast, X3D) and various tasks (e.g., image classification, object detection, segmentation, and action recognition), and it is demonstrated to achieve consistent improvements on a variety of datasets. Since we only train the model once, it also greatly reduces the training cost compared with independently training several models. Surprisingly, MutualNet can also be used to significantly boost the performance of a single network even when dynamic resource constraints are not a concern. In summary, MutualNet is a unified method for static and adaptive, 2D and 3D networks. Code and pre-trained models are available at \url{https://github.com/tayang1122/mutualnet}.
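As a rough illustration of the training scheme described above, the sketch below samples a few width-resolution configurations per batch and sums their losses. The `TinySlimmableNet` stand-in and the simple loss aggregation are assumptions for illustration, not the authors' exact algorithm.

```python
# Rough sketch of a width-resolution training step (assumed details, not the
# authors' exact algorithm). `TinySlimmableNet` is a toy stand-in: real
# slimmable layers would slice channels rather than mask them.
import random
import torch
import torch.nn.functional as F

class TinySlimmableNet(torch.nn.Module):
    def __init__(self, channels=32, num_classes=10):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, channels, 3, padding=1)
        self.head = torch.nn.Linear(channels, num_classes)
        self.width = 1.0

    def set_width(self, width):
        self.width = width

    def forward(self, x):
        h = torch.relu(self.conv(x))
        keep = max(1, int(h.shape[1] * self.width))
        mask = torch.zeros(1, h.shape[1], 1, 1, device=h.device)
        mask[:, :keep] = 1.0                        # toy "width": zero out channels
        return self.head((h * mask).mean(dim=(2, 3)))

def train_step(model, optimizer, images, labels,
               widths=(0.25, 0.5, 0.75, 1.0),
               resolutions=(128, 160, 192, 224),
               num_subnets=4):
    optimizer.zero_grad()
    total_loss = 0.0
    # Always train the largest configuration, then a few sampled smaller ones.
    configs = [(1.0, max(resolutions))] + [
        (random.choice(widths), random.choice(resolutions))
        for _ in range(num_subnets - 1)
    ]
    for width, res in configs:
        model.set_width(width)
        x = F.interpolate(images, size=res, mode="bilinear", align_corners=False)
        total_loss = total_loss + F.cross_entropy(model(x), labels)
    total_loss.backward()                           # shared weights accumulate gradients
    optimizer.step()                                # from every sampled configuration
    return total_loss.item()

model = TinySlimmableNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
print(train_step(model, opt, torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))))
```

A full implementation might also realize the knowledge transfer between configurations that the abstract calls mutual learning, e.g., by letting the sampled configurations supervise one another rather than simply summing independent losses.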
This work builds on the models and concepts presented in Part 1 to learn approximate dictionary representations of Koopman operators from data. Part 1 of this paper presented a methodology for arguing the subspace invariance of a Koopman dictionary. This methodology was demonstrated on the state-inclusive logistic lifting (SILL) basis, an affine basis augmented with conjunctive logistic functions. The SILL dictionary's nonlinear functions are homogeneous, as is the norm in data-driven dictionary learning of Koopman operators. In this paper, we discover that structured mixing of heterogeneous dictionary functions drawn from different classes of nonlinear functions achieves the same accuracy and dimensional scaling as the deep-learning-based deepDMD algorithm. We show this specifically by building a heterogeneous dictionary composed of SILL functions and conjunctive radial basis functions (RBFs). This mixed dictionary achieves the same accuracy and dimensional scaling as deepDMD with an order-of-magnitude reduction in parameters, while maintaining geometric interpretability. These results strengthen the viability of dictionary-based Koopman models for solving high-dimensional nonlinear learning problems.
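To make the notion of a heterogeneous dictionary concrete, here is a minimal sketch of a lifting that mixes SILL-style conjunctive logistic features with conjunctive RBF features. The specific parameterization (centers, steepness, widths) is an illustrative assumption, not the dictionary used in the paper.

```python
# Sketch of a heterogeneous (SILL + RBF) dictionary lifting. The centers mu,
# steepness alpha, and RBF width sigma are assumed for illustration.
import numpy as np

def conjunctive_logistic(x, mu, alpha=2.0):
    """Product of 1-D logistic functions centered at mu (a SILL-style feature)."""
    return np.prod(1.0 / (1.0 + np.exp(-alpha * (x - mu))), axis=-1)

def conjunctive_rbf(x, mu, sigma=1.0):
    """Product of 1-D Gaussians centered at mu (a conjunctive RBF feature)."""
    return np.exp(-np.sum((x - mu) ** 2, axis=-1) / (2.0 * sigma ** 2))

def lift(X, logistic_centers, rbf_centers):
    """Map states X (n_samples, n_dims) to dictionary space: [1, x, SILL feats, RBF feats]."""
    ones = np.ones((X.shape[0], 1))
    sill = np.stack([conjunctive_logistic(X, mu) for mu in logistic_centers], axis=1)
    rbf = np.stack([conjunctive_rbf(X, mu) for mu in rbf_centers], axis=1)
    return np.hstack([ones, X, sill, rbf])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(10, 2))
centers = rng.uniform(-1, 1, size=(3, 2))
print(lift(X, centers, centers).shape)   # (10, 1 + 2 + 3 + 3)
```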
Koopman operators model nonlinear dynamics as a linear dynamic system acting on a nonlinear function as the state. This nonstandard state is often called a Koopman observable and is usually approximated numerically by a superposition of functions drawn from a dictionary. In a widely used algorithm, Extended Dynamic Mode Decomposition (EDMD), the dictionary functions are drawn from a fixed class of functions. Recently, deep learning combined with EDMD has been used to learn novel dictionary functions in an algorithm called deep dynamic mode decomposition (deepDMD). The learned representation both (1) accurately models and (2) scales well with the dimension of the original nonlinear system. In this paper we analyze the learned dictionaries from deepDMD and explore the theoretical basis for their strong performance. We explore State-Inclusive Logistic Lifting (SILL) dictionary functions to approximate Koopman observables. Error analysis of these dictionary functions shows they satisfy a property of subspace approximation, which we define as uniform finite approximate closure. Our results provide a hypothesis to explain the success of deep neural networks in learning numerical approximations to Koopman operators. Part 2 of this paper will extend this explanation by demonstrating the subspace invariance of heterogeneous dictionaries and presenting a head-to-head numerical comparison of deepDMD and low-parameter heterogeneous dictionary learning.
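For concreteness, the following is a minimal EDMD sketch: states are lifted through a fixed dictionary and the finite-dimensional Koopman approximation is obtained by linear least squares. The toy two-dimensional system and the polynomial dictionary are illustrative assumptions, not the systems or dictionaries studied in the paper.

```python
# Minimal EDMD sketch: fit a finite-dimensional Koopman approximation K so that
# psi(x_{t+1}) ≈ K psi(x_t). The toy dynamics and dictionary are assumptions.
import numpy as np

def psi(X):
    """Example dictionary: [1, x1, x2, x1*x2, x1**2] for a 2-D state."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2])

def edmd(X, X_next):
    """Least-squares Koopman approximation K (acts on dictionary coordinates)."""
    Psi, Psi_next = psi(X), psi(X_next)
    # Solve Psi @ A ≈ Psi_next, then K = A.T so that psi(x_next) ≈ K psi(x).
    K = np.linalg.lstsq(Psi, Psi_next, rcond=None)[0].T
    return K

# Toy nonlinear system: x1' = 0.9*x1, x2' = 0.8*x2 + 0.1*x1**2.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
X_next = np.column_stack([0.9 * X[:, 0], 0.8 * X[:, 1] + 0.1 * X[:, 0] ** 2])
K = edmd(X, X_next)
print(np.round(K, 3))  # one-step prediction: psi(x_next) ≈ psi(x) @ K.T
```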
Cloud computing holds the promise of reduced costs through economies of scale. To realize this promise, cloud computing vendors typically solve sequential resource allocation problems, where customer workloads are packed on shared hardware. Virtual machines (VMs) form the foundation of modern cloud computing, as they help logically abstract user compute from shared physical infrastructure. Traditionally, VM packing problems are solved by predicting demand, followed by a Model Predictive Control (MPC) optimization over a future horizon. We introduce an approximate formulation of an industrial VM packing problem as an MILP with soft constraints parameterized by the predictions. Recently, predict-and-optimize (PnO) was proposed for end-to-end training of prediction models by back-propagating the cost of decisions through the optimization problem. However, PnO is unable to scale to the large prediction horizons prevalent in cloud computing. To tackle this issue, we propose the Predict-and-Critic (PnC) framework, which outperforms PnO with just a two-step horizon by leveraging reinforcement learning. PnC jointly trains a prediction model and a terminal Q function that approximates cost-to-go over a long horizon, by back-propagating the cost of decisions through the optimization problem \emph{and from the future}. The terminal Q function allows us to solve a much smaller two-step horizon optimization problem than the multi-step horizon necessary in PnO. We evaluate PnO and the PnC framework on two datasets, three workloads, and with disturbances not modeled in the optimization problem. We find that PnC significantly improves decision quality over PnO, even when the optimization problem is not a perfect representation of reality. We also find that hardening the soft constraints of the MILP and back-propagating through the constraints improves decision quality for both PnO and PnC.
We consider the problem of multi-agent navigation and collision avoidance when observations are limited to the local neighborhood of each agent. We propose InforMARL, a novel architecture for multi-agent reinforcement learning (MARL) which uses local information intelligently to compute paths for all the agents in a decentralized manner. Specifically, InforMARL aggregates information about the local neighborhood of agents for both the actor and the critic using a graph neural network and can be used in conjunction with any standard MARL algorithm. We show that (1) in training, InforMARL has better sample efficiency and performance than baseline approaches, despite using less information, and (2) in testing, it scales well to environments with arbitrary numbers of agents and obstacles.
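The sketch below illustrates the general idea of aggregating information from an agent's local neighborhood with a graph-style mean aggregation before producing per-agent action logits. The radius graph, mean aggregation, and network sizes are illustrative assumptions, not InforMARL's exact architecture; in practice the aggregated embedding would feed both the actor and the critic of a standard MARL algorithm.

```python
# Toy sketch of neighborhood aggregation for a decentralized policy. The radius
# graph, mean aggregation, and MLP sizes are illustrative assumptions, not the
# paper's exact architecture.
import torch
import torch.nn as nn

class NeighborhoodPolicy(nn.Module):
    def __init__(self, feat_dim=4, hidden=64, n_actions=5):
        super().__init__()
        self.embed = nn.Linear(feat_dim, hidden)
        self.actor = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )

    def forward(self, feats, pos, radius=1.0):
        # feats: (N, feat_dim) per-entity features; pos: (N, 2) positions.
        h = torch.relu(self.embed(feats))
        dist = torch.cdist(pos, pos)                      # (N, N) pairwise distances
        adj = (dist < radius).float()
        adj.fill_diagonal_(0.0)                           # exclude self from neighbors
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neighbor_mean = adj @ h / deg                     # mean over local neighborhood
        return self.actor(torch.cat([h, neighbor_mean], dim=-1))  # per-agent logits

# Example: 6 agents/entities with random features and positions.
policy = NeighborhoodPolicy()
logits = policy(torch.randn(6, 4), torch.rand(6, 2) * 2)
print(logits.shape)  # torch.Size([6, 5])
```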
Rates of missing data often depend on record-keeping policies and thus may change across times and locations, even when the underlying features are comparatively stable. In this paper, we introduce the problem of Domain Adaptation under Missingness Shift (DAMS). Here, (labeled) source data and (unlabeled) target data would be exchangeable but for different missing data mechanisms. We show that when missing data indicators are available, DAMS can reduce to covariate shift. Focusing on the setting where missing data indicators are absent, we establish the following theoretical results for underreporting completely at random: (i) covariate shift is violated (adaptation is required); (ii) the optimal source predictor can perform worse on the target domain than a constant one; (iii) the optimal target predictor can be identified, even when the missingness rates themselves are not; and (iv) for linear models, a simple analytic adjustment yields consistent estimates of the optimal target parameters. In experiments on synthetic and semi-synthetic data, we demonstrate the promise of our methods when assumptions hold. Finally, we discuss a rich family of future extensions.
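To illustrate the linear-model setting of result (iv), the simulation below generates underreporting completely at random by zeroing features, then applies a simple moment-based correction. For simplicity the sketch assumes the keep-probabilities are known, which is stronger than the paper's identification result (the abstract notes the optimal target predictor is identifiable even when the rates are not); it is meant only to show how second moments shift under missingness, not to reproduce the paper's estimator.

```python
# Illustrative simulation of domain adaptation under missingness shift for a
# linear model. Underreporting completely at random is simulated by zeroing
# each feature independently. The moment-based correction below assumes the
# keep-probabilities are known, a simplification for illustration only; the
# paper identifies the optimal target predictor without the rates themselves.
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 200_000
w_true = rng.normal(size=d)
Sigma = np.eye(d) + 0.3                        # correlated features

def sample(n, keep_prob):
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    y = X @ w_true + 0.1 * rng.normal(size=n)  # outcome depends on the true features
    mask = rng.random((n, d)) < keep_prob
    return X * mask, y                         # unreported values recorded as 0

p_s, p_t = 0.5, 0.9                            # source reports less often than target
X_s, y_s = sample(n, p_s)
X_t, y_t = sample(n, p_t)

def moments(X, y):
    return X.T @ X / len(y), X.T @ y / len(y)

# Undo the source missingness on the second moments, then re-impose the target's.
S_s, b_s = moments(X_s, y_s)
Sigma_hat = S_s / p_s**2
np.fill_diagonal(Sigma_hat, np.diag(S_s) / p_s)
S_t_hat = p_t**2 * Sigma_hat
np.fill_diagonal(S_t_hat, p_t * np.diag(Sigma_hat))
w_adjusted = np.linalg.solve(S_t_hat, (p_t / p_s) * b_s)

w_naive = np.linalg.solve(S_s, b_s)            # source-optimal predictor, no adjustment
S_t, b_t = moments(X_t, y_t)
w_oracle = np.linalg.solve(S_t, b_t)           # fit directly on (labeled) target data
print("naive error   :", np.linalg.norm(w_naive - w_oracle))
print("adjusted error:", np.linalg.norm(w_adjusted - w_oracle))
```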
Data-driven turbulence modeling is experiencing a surge of interest following algorithmic and hardware developments in data science. We discuss an approach using the differentiable physics paradigm, which combines known physics with machine learning, to develop closure models for Burgers turbulence. We consider the 1D Burgers system as a prototypical test problem for modeling the unresolved terms in advection-dominated turbulence problems. We train a series of models that incorporate varying degrees of physical assumptions into an a posteriori loss function, to test the efficacy of the models across a range of system parameters, including viscosity, time, and grid resolution. We find that constraining models with inductive biases, in the form of partial differential equations that contain known physics or existing closure approaches, produces highly data-efficient, accurate, and generalizable models, outperforming state-of-the-art baselines. Adding structure in the form of physics information also brings a degree of interpretability to the models, potentially offering a stepping stone toward the future of closure modeling.
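As a concrete anchor for the prototype problem, here is a minimal coarse-grid solver step for the 1D viscous Burgers equation with a hook for a closure term. The finite-difference discretization and the hypothetical `closure` argument are illustrative assumptions; in the differentiable-physics setting described above, the same step would be written in an automatic-differentiation framework so that an a posteriori loss over the rollout can be back-propagated into the learned closure.

```python
# Minimal sketch of a 1D viscous Burgers step with a closure hook. The
# finite-difference discretization and the `closure` placeholder are
# illustrative assumptions; a differentiable-physics version would implement
# the same step in an autodiff framework so that an a posteriori loss can be
# back-propagated through the rollout into the learned closure.
import numpy as np

def burgers_step(u, dx, dt, nu, closure=None):
    """One explicit step of u_t + u u_x = nu u_xx (+ closure) on a periodic grid."""
    u_x = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)          # central difference
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    tau = closure(u, dx) if closure is not None else 0.0       # learned subgrid term
    return u + dt * (-u * u_x + nu * u_xx + tau)

# Example rollout on a sine initial condition.
N, L, nu = 256, 2 * np.pi, 0.05
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N
u = np.sin(x)
dt = 0.2 * dx                                                  # small step for stability
for _ in range(200):
    u = burgers_step(u, dx, dt, nu)
print(float(u.min()), float(u.max()))
```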
Machine learning (ML) refers to computer algorithms that predict meaningful outputs or categorize complex systems based on large amounts of data. ML is applied in various fields, including the natural sciences, engineering, space exploration, and even game development. This paper focuses on the use of machine learning in the field of chemical and biological oceanography. The application of ML is a promising tool for predicting global fixed nitrogen levels, partial pressure of carbon dioxide, and other chemical properties. Machine learning is also used in biological oceanography to detect planktonic forms from various images (i.e., microscopy, FlowCam, and video recorders), spectrometers, and other signal processing techniques. Moreover, ML has successfully classified mammals using their acoustics, detecting endangered mammal and fish species in specific environments. Most importantly, using environmental data, ML has proven to be an effective method for predicting hypoxic conditions and harmful algal bloom events, an important measurement for environmental monitoring. Furthermore, machine learning has been used to construct a number of databases for various species that will be useful to other researchers, and the creation of new algorithms will help the marine research community better understand the chemistry and biology of the ocean.
This paper presents SCALES, a general framework that translates well-established fairness principles into a common representation based on the Constrained Markov Decision Process (CMDP). With the help of causal language, our framework can place constraints on the decision-making process (procedural fairness) as well as on the outcomes resulting from decisions (outcome fairness). Specifically, we show that well-known fairness principles can be encoded as a utility component, a non-causal component, or a causal component in a SCALES-CMDP. We illustrate SCALES using a set of case studies involving a simulated healthcare scenario and the real-world COMPAS dataset. Experiments demonstrate that our framework produces fair policies that embody alternative fairness principles in both single-step and sequential decision-making scenarios.
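As a rough illustration of encoding a principle as a constraint in a CMDP, the sketch below solves a tiny constrained MDP via its occupancy-measure linear program, with a generic cost function standing in for a fairness measure. The two-state MDP, the numbers, and the budget are illustrative assumptions, not the SCALES construction itself.

```python
# Tiny occupancy-measure linear program for a constrained MDP: maximize expected
# reward subject to a budget on an expected cost, where cost(s, a) stands in for
# a fairness measure. The two-state MDP, the numbers, and the budget are
# illustrative assumptions, not the SCALES construction itself.
import numpy as np
from scipy.optimize import linprog

nS, nA, gamma = 2, 2, 0.9
mu0 = np.array([0.5, 0.5])                        # initial state distribution
P = np.zeros((nS, nA, nS))                        # transition kernel P[s, a, s']
P[0, 0] = [0.9, 0.1]; P[0, 1] = [0.2, 0.8]
P[1, 0] = [0.8, 0.2]; P[1, 1] = [0.1, 0.9]
r = np.array([[0.0, 1.0], [0.0, 1.0]])            # action 1 is more rewarding...
cost = np.array([[0.0, 1.0], [0.0, 0.2]])         # ...but incurs a "fairness" cost
tau = 0.4                                         # budget on normalized expected cost

# Variables d[s, a] >= 0: normalized discounted state-action occupancy measure.
# Flow constraints: sum_a d(s',a) - gamma * sum_{s,a} P(s'|s,a) d(s,a) = (1-gamma) mu0(s').
A_eq = np.zeros((nS, nS * nA))
for sp in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, s * nA + a] = (1.0 if s == sp else 0.0) - gamma * P[s, a, sp]
b_eq = (1 - gamma) * mu0

res = linprog(c=-r.flatten(),                     # maximize reward = minimize -reward
              A_ub=cost.flatten()[None, :], b_ub=[tau],
              A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
d = res.x.reshape(nS, nA)
policy = d / d.sum(axis=1, keepdims=True)         # constrained-optimal stochastic policy
print(policy)
```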
The combinatorial problem of learning directed acyclic graphs (DAGs) from data was recently framed as a purely continuous optimization problem by leveraging a differentiable acyclicity characterization of DAGs based on the trace of a matrix exponential function. Existing acyclicity characterizations are based on the idea that powers of an adjacency matrix contain information about walks and cycles. In this work, we propose a \textit{fundamentally different} acyclicity characterization based on the log-determinant (log-det) function, which leverages the nilpotency property of DAGs. To deal with the inherent asymmetries of a DAG, we relate the domain of our log-det characterization to the set of \textit{M-matrices}, which is a key difference from the classical log-det function defined over the cone of positive definite matrices. Similar to previously proposed acyclicity functions, our characterization is also exact and differentiable. However, compared to existing characterizations, our log-det function (1) is better at detecting large cycles, (2) has better-behaved gradients, and (3) runs an order of magnitude faster in practice. On the optimization side, we drop the typically used augmented Lagrangian scheme and propose DAGMA (\textit{Directed Acyclic Graphs via M-matrices for Acyclicity}), a method that resembles the central path of barrier methods. Each point in DAGMA's central path is the solution of an unconstrained problem regularized by our log-det function; we then show that, at the limit of the central path, the solution is guaranteed to be a DAG. Finally, we provide extensive experiments for \textit{linear} and \textit{nonlinear} SEMs and demonstrate that our approach can achieve large speed-ups and smaller structural Hamming distances against state-of-the-art methods.
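For intuition, here is a small sketch of a log-det acyclicity function in the spirit of DAGMA, using h_s(W) = -log det(sI - W∘W) + d log s, which vanishes exactly on weighted adjacency matrices of DAGs when sI - W∘W is kept an M-matrix. The example matrices are illustrative.

```python
# Sketch of a log-det acyclicity check in the spirit of DAGMA:
# h_s(W) = -log det(s*I - W∘W) + d*log(s), which is zero exactly when the
# weighted adjacency matrix W corresponds to a DAG (for suitable s > 0 keeping
# s*I - W∘W an M-matrix). The example matrices are illustrative.
import numpy as np

def h_logdet(W, s=1.0):
    d = W.shape[0]
    M = s * np.eye(d) - W * W           # Hadamard square removes sign information
    sign, logabsdet = np.linalg.slogdet(M)
    if sign <= 0:
        raise ValueError("s*I - W∘W must have positive determinant (choose larger s)")
    return -logabsdet + d * np.log(s)

W_dag = np.array([[0.0, 0.8, 0.0],
                  [0.0, 0.0, 0.5],
                  [0.0, 0.0, 0.0]])      # 1 -> 2 -> 3: acyclic
W_cyc = W_dag.copy()
W_cyc[2, 0] = 0.5                        # adds the cycle 1 -> 2 -> 3 -> 1

print(h_logdet(W_dag))                   # ~0.0
print(h_logdet(W_cyc))                   # > 0: cycle detected
```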